Results 1 - 20 of 49
1.
Behav Res Methods ; 56(3): 1433-1448, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37326771

ABSTRACT

Anonymous web-based experiments are increasingly used in many domains of behavioral research. However, online studies of auditory perception, especially of psychoacoustic phenomena pertaining to low-level sensory processing, are challenging because of limited available control of the acoustics, and the inability to perform audiometry to confirm normal-hearing status of participants. Here, we outline our approach to mitigate these challenges and validate our procedures by comparing web-based measurements to lab-based data on a range of classic psychoacoustic tasks. Individual tasks were created using jsPsych, an open-source JavaScript front-end library. Dynamic sequences of psychoacoustic tasks were implemented using Django, an open-source library for web applications, and combined with consent pages, questionnaires, and debriefing pages. Subjects were recruited via Prolific, a subject recruitment platform for web-based studies. Guided by a meta-analysis of lab-based data, we developed and validated a screening procedure to select participants for (putative) normal-hearing status based on their responses in a suprathreshold task and a survey. Headphone use was standardized by supplementing procedures from prior literature with a binaural hearing task. Individuals meeting all criteria were re-invited to complete a range of classic psychoacoustic tasks. For the re-invited participants, absolute thresholds were in excellent agreement with lab-based data for fundamental frequency discrimination, gap detection, and sensitivity to interaural time delay and level difference. Furthermore, word identification scores, consonant confusion patterns, and co-modulation masking release effect also matched lab-based studies. Our results suggest that web-based psychoacoustics is a viable complement to lab-based research. Source code for our infrastructure is provided.
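The participant-screening gate described above (a suprathreshold task plus a survey, then re-invitation of qualifying individuals) can be sketched as a simple eligibility check. This is an illustrative sketch, not the authors' published source code; the function name and the criterion value are hypothetical placeholders.

```python
# Illustrative sketch of the screening gate described above. NOT the
# authors' published code; the criterion value is a hypothetical
# placeholder, not the meta-analysis-derived value from the paper.

def passes_screening(task_percent_correct, reports_hearing_difficulty,
                     passed_headphone_check, task_criterion=90.0):
    """Return True if a participant qualifies for re-invitation."""
    return bool(task_percent_correct >= task_criterion
                and not reports_hearing_difficulty
                and passed_headphone_check)
```

In the study itself, the criteria were guided by a meta-analysis of lab-based data and combined with a binaural headphone check; the fixed threshold here merely stands in for those derived values.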


Subject(s)
Auditory Perception, Hearing, Humans, Psychoacoustics, Hearing/physiology, Auditory Perception/physiology, Audiometry, Internet, Auditory Threshold/physiology, Acoustic Stimulation
2.
bioRxiv ; 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37790457

ABSTRACT

The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumvented this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity to performance in a range of speech perception tasks. Results suggest that robust TFS sensitivity does not confer additional masking release from pitch or spatial cues, but appears to confer resilience against the effects of reverberation. Yet, across conditions, we also found that greater TFS sensitivity is associated with faster response times, consistent with reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.

3.
Commun Biol ; 6(1): 981, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37752215

ABSTRACT

The auditory system has exquisite temporal coding in the periphery, which is transformed into a rate-based code in central auditory structures, like auditory cortex. However, the cortex is still able to synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans with electroencephalography) and brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and utilizing dynamic binaural cues for spatial unmasking, i.e., measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.


Subject(s)
Auditory Cortex, Time Perception, Humans, Acoustics, Brain Stem, Cortical Synchronization
4.
Hum Brain Mapp ; 44(17): 5810-5827, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37688547

ABSTRACT

Cerebellar differences have long been documented in autism spectrum disorder (ASD), yet the extent to which such differences might impact language processing in ASD remains unknown. To investigate this, we recorded brain activity with magnetoencephalography (MEG) while ASD and age-matched typically developing (TD) children passively processed spoken meaningful English and meaningless Jabberwocky sentences. Using a novel source localization approach that allows higher resolution MEG source localization of cerebellar activity, we found that, unlike TD children, ASD children showed no difference between evoked responses to meaningful versus meaningless sentences in right cerebellar lobule VI. ASD children also had atypically weak functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and several left-hemisphere sensorimotor and language regions in later time windows. In contrast, ASD children had atypically strong functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and primary auditory cortical areas in an earlier time window. The atypical functional connectivity patterns in ASD correlated with ASD severity and the ability to inhibit involuntary attention. These findings align with a model where cerebro-cerebellar speech processing mechanisms in ASD are impacted by aberrant stimulus-driven attention, which could result from atypical temporal information and predictions of auditory sensory events by right cerebellar lobule VI.


Subject(s)
Autism Spectrum Disorder, Child, Humans, Autism Spectrum Disorder/diagnostic imaging, Magnetoencephalography, Cerebellum/diagnostic imaging, Magnetic Resonance Imaging, Brain Mapping
5.
J Neurosci Methods ; 398: 109954, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37625650

ABSTRACT

BACKGROUND: Disabling hearing loss affects nearly 466 million people worldwide (World Health Organization). The auditory brainstem response (ABR) is the most common non-invasive clinical measure of evoked potentials, e.g., as an objective measure for universal newborn hearing screening. In research, the ABR is widely used for estimating hearing thresholds and cochlear synaptopathy in animal models of hearing loss. The ABR contains multiple waves representing neural activity across different peripheral auditory pathway stages, which arise within the first 10 ms after stimulus onset. Multi-channel (e.g., 32 or higher) caps provide robust measures for a wide variety of EEG applications for the study of human hearing. However, translational studies using preclinical animal models typically rely on only a few subdermal electrodes. NEW METHOD: We evaluated the feasibility of a 32-channel rodent EEG mini-cap for improving the reliability of ABR measures in chinchillas, a common model of human hearing. RESULTS: After confirming initial feasibility, a systematic experimental design tested five potential sources of variability inherent to the mini-cap methodology. We found each source of variance minimally affected mini-cap ABR waveform morphology, thresholds, and wave-1 amplitudes. COMPARISON WITH EXISTING METHOD: The mini-cap methodology was statistically more robust and less variable than the conventional subdermal-needle methodology, most notably when analyzing ABR thresholds. Additionally, fewer repetitions were required to produce a robust ABR response when using the mini-cap. CONCLUSIONS: These results suggest the EEG mini-cap can improve translational studies of peripheral auditory evoked responses. Future work will evaluate the potential of the mini-cap to improve the reliability of more centrally evoked (e.g., cortical) EEG responses.


Subject(s)
Deafness, Hearing Loss, Animals, Newborn Infant, Humans, Auditory Brain Stem Evoked Potentials/physiology, Chinchilla, Noise, Reproducibility of Results, Auditory Threshold/physiology, Hearing Loss/diagnosis, Electroencephalography, Acoustic Stimulation
6.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 14052-14054, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37402186

ABSTRACT

A recent paper claims that a newly proposed method classifies EEG data recorded from subjects viewing ImageNet stimuli better than two prior methods. However, the analysis used to support that claim is based on confounded data. We repeat the analysis on a large new dataset that is free from that confound. Training and testing on aggregated supertrials derived by summing trials demonstrates that the two prior methods achieve statistically significant above-chance accuracy while the newly proposed method does not.

7.
Sci Rep ; 13(1): 10216, 2023 06 23.
Article in English | MEDLINE | ID: mdl-37353552

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.
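The alpha (7-15 Hz) and beta (13-30 Hz) band-power features described above can be illustrated with a plain discrete Fourier transform. This is a stdlib-only sketch, not the authors' analysis pipeline; the "trial" is a synthetic 10 Hz oscillation standing in for real EEG.

```python
import math

# Hedged sketch of single-trial band power: summed DFT power over a
# frequency band. Synthetic data, not the authors' EEG pipeline.

def band_power(x, fs, f_lo, f_hi):
    """Summed DFT power over [f_lo, f_hi] Hz (naive O(n^2) DFT)."""
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            total += (re * re + im * im) / n
    return total

fs = 128
trial = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # 1 s of 10 Hz
alpha = band_power(trial, fs, 7, 15)   # captures the 10 Hz component
beta = band_power(trial, fs, 13, 30)   # near zero for this synthetic trial
```

In practice one would use an FFT and compute induced (rather than evoked) power, but the band-summing step is the same idea.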


Subject(s)
Speech Intelligibility, Speech Perception, Speech Perception/physiology, Noise, Auditory Perception, Electroencephalography
8.
J Acoust Soc Am ; 153(4): 2482, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37092950

ABSTRACT

Physiological and psychoacoustic studies of the medial olivocochlear reflex (MOCR) in humans have often relied on long duration elicitors (>100 ms). This is largely due to previous research using otoacoustic emissions (OAEs) that found multiple MOCR time constants, including time constants in the 100s of milliseconds, when elicited by broadband noise. However, the effect of the duration of a broadband noise elicitor on similar psychoacoustic tasks is currently unknown. The current study measured the effects of ipsilateral broadband noise elicitor duration on psychoacoustic gain reduction estimated from a forward-masking paradigm. Analysis showed that both masker type and elicitor duration were significant main effects, but no interaction was found. Gain reduction time constants were ∼46 ms for the masker present condition and ∼78 ms for the masker absent condition (ranging from ∼29 to 172 ms), both similar to the fast time constants reported in the OAE literature (70-100 ms). Maximum gain reduction was seen for elicitor durations of ∼200 ms. This is longer than the 50-ms duration which was found to produce maximum gain reduction with a tonal on-frequency elicitor. Future studies of gain reduction may use 150-200 ms broadband elicitors to maximally or near-maximally stimulate the MOCR.
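The saturating growth of gain reduction with elicitor duration, summarized by time constants like the ~46 ms and ~78 ms estimates above, can be illustrated with a simple exponential model. This is a hedged, stdlib-only sketch with synthetic data, not the study's fitting procedure; the grid-search fit and the parameter values are placeholders.

```python
import math

# Hedged sketch: recover a time constant tau from gain reduction measured
# at several elicitor durations, assuming saturating exponential growth
# G(d) = G_max * (1 - exp(-d / tau)). Data here are synthetic, generated
# from tau = 78 ms to mirror the masker-absent estimate in the abstract.

def gain(duration_ms, g_max, tau_ms):
    return g_max * (1.0 - math.exp(-duration_ms / tau_ms))

def fit_tau(durations, observed, g_max, taus):
    """Grid-search the tau (ms) minimizing squared error."""
    def sse(tau):
        return sum((gain(d, g_max, tau) - y) ** 2
                   for d, y in zip(durations, observed))
    return min(taus, key=sse)

durations = [25, 50, 100, 200, 400]                       # elicitor durations (ms)
observed = [gain(d, 10.0, 78.0) for d in durations]       # synthetic, tau = 78 ms
best_tau = fit_tau(durations, observed, 10.0, range(10, 200))
```

Note how gain at 200 ms is already near its asymptote for tau ≈ 78 ms, consistent with maximum gain reduction appearing at ~200 ms elicitor durations.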


Subject(s)
Cochlea, Spontaneous Otoacoustic Emissions, Humans, Psychoacoustics, Cochlea/physiology, Spontaneous Otoacoustic Emissions/physiology, Reflex/physiology, Time Factors, Acoustic Stimulation, Perceptual Masking/physiology
9.
J Autism Dev Disord ; 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36932270

ABSTRACT

Auditory steady-state response (ASSR) has been studied as a potential biomarker for abnormal auditory sensory processing in autism spectrum disorder (ASD), with mixed results. Motivated by prior somatosensory findings of group differences in inter-trial coherence (ITC) between ASD and typically developing (TD) individuals at twice the steady-state stimulation frequency, we examined ASSR at 25 and 50 as well as 43 and 86 Hz in response to 25-Hz and 43-Hz auditory stimuli, respectively, using magnetoencephalography. Data were recorded from 22 ASD and 31 TD children, ages 6-17 years. ITC measures showed prominent ASSRs at the stimulation and double frequencies, without significant group differences. These results do not support ASSR as a robust ASD biomarker of abnormal auditory processing in ASD. Furthermore, the previously observed atypical double-frequency somatosensory response in ASD did not generalize to the auditory modality. Thus, the hypothesis about modality-independent abnormal local connectivity in ASD was not supported.

10.
Neuroimage Clin ; 37: 103336, 2023.
Article in English | MEDLINE | ID: mdl-36724734

ABSTRACT

Individuals with autism spectrum disorder (ASD) commonly display speech processing abnormalities. Binding of acoustic features of speech distributed across different frequencies into coherent speech objects is fundamental in speech perception. Here, we tested the hypothesis that the cortical processing of bottom-up acoustic cues for speech binding may be anomalous in ASD. We recorded magnetoencephalography while ASD children (ages 7-17) and typically developing peers heard sentences of sine-wave speech (SWS) and modulated SWS (MSS) where binding cues were restored through increased temporal coherence of the acoustic components and the introduction of harmonicity. The ASD group showed increased long-range feedforward functional connectivity from left auditory to parietal cortex with concurrent decreased local functional connectivity within the parietal region during MSS relative to SWS. As the parietal region has been implicated in auditory object binding, our findings support our hypothesis of atypical bottom-up speech binding in ASD. Furthermore, the long-range functional connectivity correlated with behaviorally measured auditory processing abnormalities, confirming the relevance of these atypical cortical signatures to the ASD phenotype. Lastly, the group difference in the local functional connectivity was driven by the youngest participants, suggesting that impaired speech binding in ASD might be ameliorated upon entering adolescence.


Subject(s)
Autism Spectrum Disorder, Humans, Autism Spectrum Disorder/diagnostic imaging, Cues, Speech, Magnetoencephalography, Auditory Perception
11.
bioRxiv ; 2023 May 22.
Article in English | MEDLINE | ID: mdl-36712081

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.

12.
Commun Biol ; 5(1): 733, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869142

ABSTRACT

Animal models suggest that cochlear afferent nerve endings may be more vulnerable than sensory hair cells to damage from acoustic overexposure and aging. Because neural degeneration without hair-cell loss cannot be detected in standard clinical audiometry, whether such damage occurs in humans is hotly debated. Here, we address this debate through coordinated experiments in at-risk humans and a wild-type chinchilla model. Cochlear neuropathy leads to large and sustained reductions of the wideband middle-ear muscle reflex in chinchillas. Analogously, human wideband reflex measures revealed distinct damage patterns in middle age, and in young individuals with histories of high acoustic exposure. Analysis of an independent large public dataset and additional measurements using clinical equipment corroborated the patterns revealed by our targeted cross-species experiments. Taken together, our results suggest that cochlear neural damage is widespread even in populations with clinically normal hearing.


Subject(s)
Cochlea, Auditory Hair Cells, Acoustic Stimulation, Animals, Chinchilla, Auditory Hair Cells/physiology, Hearing, Humans, Middle Aged
13.
J Acoust Soc Am ; 151(5): 3116, 2022 05.
Article in English | MEDLINE | ID: mdl-35649891

ABSTRACT

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.g., participants' homes). Remote testing could provide greater access to participants, larger sample sizes, and opportunities to characterize performance in typical listening environments at the cost of reduced control of environmental conditions, less precise calibration, and inconsistency in attentional state and/or response behaviors from relatively smaller sample sizes and unintuitive experimental tasks. The Acoustical Society of America Technical Committee on Psychological and Physiological Acoustics launched the Task Force on Remote Testing (https://tcppasa.org/remotetesting/) in May 2020 with goals of surveying approaches and platforms available to support remote testing and identifying challenges and considerations for prospective investigators. The results of this task force survey were made available online in the form of a set of Wiki pages and summarized in this report. This report outlines the state of the art of remote testing in auditory-related research as of August 2021, based on the Wiki and a literature search of papers published in this area since 2020, and provides three case studies to demonstrate feasibility in practice.


Subject(s)
Acoustics, Auditory Perception, Attention/physiology, Humans, Prospective Studies, Sound
14.
eNeuro ; 9(2)2022.
Article in English | MEDLINE | ID: mdl-35193890

ABSTRACT

Neural phase-locking to temporal fluctuations is a fundamental and unique mechanism by which acoustic information is encoded by the auditory system. The perceptual role of this metabolically expensive mechanism, the neural phase-locking to temporal fine structure (TFS) in particular, is debated. Although hypothesized, it is unclear whether auditory perceptual deficits in certain clinical populations are attributable to deficits in TFS coding. Efforts to uncover the role of TFS have been impeded by the fact that there are no established assays for quantifying the fidelity of TFS coding at the individual level. While many candidates have been proposed, for an assay to be useful, it should not only intrinsically depend on TFS coding, but should also have the property that individual differences in the assay reflect TFS coding per se over and beyond other sources of variance. Here, we evaluate a range of behavioral and electroencephalogram (EEG)-based measures as candidate individualized measures of TFS sensitivity. Our comparisons of behavioral and EEG-based metrics suggest that extraneous variables dominate both behavioral scores and EEG amplitude metrics, rendering them ineffective. After adjusting behavioral scores using lapse rates, and extracting latency or percent-growth metrics from EEG, interaural timing sensitivity measures exhibit robust behavior-EEG correlations. Together with the fact that unambiguous theoretical links can be made relating binaural measures and phase-locking to TFS, our results suggest that these "adjusted" binaural assays may be well suited for quantifying individual TFS processing.


Subject(s)
Auditory Perception, Acoustic Stimulation/methods, Humans
15.
PLoS Biol ; 20(2): e3001541, 2022 02.
Article in English | MEDLINE | ID: mdl-35167585

ABSTRACT

Organizing sensory information into coherent perceptual objects is fundamental to everyday perception and communication. In the visual domain, indirect evidence from cortical responses suggests that children with autism spectrum disorder (ASD) have anomalous figure-ground segregation. While auditory processing abnormalities are common in ASD, especially in environments with multiple sound sources, to date, the question of scene segregation in ASD has not been directly investigated in audition. Using magnetoencephalography, we measured cortical responses to unattended (passively experienced) auditory stimuli while parametrically manipulating the degree of temporal coherence that facilitates auditory figure-ground segregation. Results from 21 children with ASD (aged 7-17 years) and 26 age- and IQ-matched typically developing children provide evidence that children with ASD show anomalous growth of cortical neural responses with increasing temporal coherence of the auditory figure. The documented neurophysiological abnormalities did not depend on age, and were reflected both in the response evoked by changes in temporal coherence of the auditory scene and in the associated induced gamma rhythms. Furthermore, the individual neural measures were predictive of diagnosis (83% accuracy) and also correlated with behavioral measures of ASD severity and auditory processing abnormalities. These findings offer new insight into the neural mechanisms underlying auditory perceptual deficits and sensory overload in ASD, and suggest that temporal-coherence-based auditory scene analysis and suprathreshold processing of coherent auditory objects may be atypical in ASD.


Subject(s)
Auditory Perception/physiology, Autism Spectrum Disorder/physiopathology, Cortical Synchronization/physiology, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Adolescent, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/psychology, Child, Female, Humans, Magnetoencephalography/methods, Male, Reaction Time/physiology
16.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9217-9220, 2022 12.
Article in English | MEDLINE | ID: mdl-34665721

ABSTRACT

Neuroimaging experiments in general, and EEG experiments in particular, must take care to avoid confounds. A recent TPAMI paper uses data that suffers from a serious previously reported confound. We demonstrate that their new model and analysis methods do not remedy this confound, and therefore that their claims of high accuracy and neuroscience relevance are invalid.


Subject(s)
Algorithms, Brain Mapping, Brain Mapping/methods, Brain/diagnostic imaging, Neuroimaging, Learning, Electroencephalography/methods
17.
Ear Hear ; 43(3): 849-861, 2022.
Article in English | MEDLINE | ID: mdl-34751679

ABSTRACT

OBJECTIVES: Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception and why its effect is variable is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in the NR effect on speech-in-noise performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. DESIGN: Thirty-six adult listeners with normal hearing participated in the study. Behavioral and electroencephalographic responses were simultaneously obtained during a speech-in-noise task in which natural monosyllabic words were presented at three different signal-to-noise ratios, each with NR off and on. A within-subject analysis assessed the effect of NR on cortical evoked responses to target speech in the temporal-frontal speech and language brain regions, including supramarginal gyrus and inferior frontal gyrus in the left hemisphere. In addition, an across-subject analysis related an individual's tolerance to noise, measured as the amplitude ratio of auditory-cortical responses to target speech and background noise, to their speech-in-noise performance. RESULTS: At the group level, in the poorest signal-to-noise ratio condition, NR significantly increased early supramarginal gyrus activity and decreased late inferior frontal gyrus activity, indicating a switch to more immediate lexical access and less effortful cognitive processing, although no improvement in behavioral performance was found. The across-subject analysis revealed that the cortical index of individual noise tolerance significantly correlated with NR-driven changes in speech-in-noise performance. CONCLUSIONS: NR can facilitate speech-in-noise processing despite no improvement in behavioral performance. Findings from the current study also indicate that people with lower noise tolerance are more likely to benefit from NR. Overall, results suggest that future research should take a mechanistic approach to NR outcomes and individual noise tolerance.


Subject(s)
Hearing Aids, Speech Perception, Adult, Humans, Noise, Signal-to-Noise Ratio, Speech, Speech Perception/physiology
18.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.
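As a toy illustration of an envelope-domain signal-to-noise ratio, one can rectify and smooth two waveforms and compare envelope power. This is a stdlib-only sketch, not the authors' EEG-derived neural metric; the crude rectify-and-average envelope and the synthetic tones (with a 4:1 amplitude ratio, i.e., about 12 dB) are placeholders for illustration only.

```python
import math

# Illustrative envelope-domain SNR (NOT the authors' neural measure):
# extract slow envelopes by rectification + moving average, then compare
# envelope power of target and masker. Both signals are synthetic tones.

def envelope(x, win=32):
    """Rectify and smooth with a moving average (crude envelope)."""
    rect = [abs(v) for v in x]
    return [sum(rect[max(0, i - win + 1):i + 1]) / win for i in range(len(x))]

def envelope_snr_db(target, masker):
    p_t = sum(v * v for v in envelope(target)) / len(target)
    p_m = sum(v * v for v in envelope(masker)) / len(masker)
    return 10.0 * math.log10(p_t / p_m)

fs = 1000
t = [i / fs for i in range(fs)]
target = [math.sin(2 * math.pi * 100 * ti) for ti in t]         # stronger target
masker = [0.25 * math.sin(2 * math.pi * 130 * ti) for ti in t]  # weaker masker
snr = envelope_snr_db(target, masker)  # approximately 12 dB
```

The study's metric is computed from neural (EEG) responses in the modulation domain rather than from the raw acoustic waveforms, but the underlying ratio-of-envelope-power idea is the same.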


Subject(s)
Speech Intelligibility, Speech Perception, Acoustic Stimulation, Acoustics, Auditory Perception, Humans, Perceptual Masking, Signal-to-Noise Ratio
19.
Prog Neurobiol ; 203: 102077, 2021 08.
Article in English | MEDLINE | ID: mdl-34033856

ABSTRACT

Autism spectrum disorder (ASD) is associated with widespread receptive language impairments, yet the neural mechanisms underlying these deficits are poorly understood. Neuroimaging has shown that processing of socially-relevant sounds, including speech and non-speech, is atypical in ASD. However, it is unclear how the presence of lexical-semantic meaning affects speech processing in ASD. Here, we recorded magnetoencephalography data from individuals with ASD (N = 22, ages 7-17, 4 females) and typically developing (TD) peers (N = 30, ages 7-17, 5 females) during unattended listening to meaningful auditory speech sentences and meaningless jabberwocky sentences. After adjusting for age, ASD individuals showed stronger responses to meaningless jabberwocky sentences than to meaningful speech sentences in the same left temporal and parietal language regions where TD individuals exhibited stronger responses to meaningful speech. Maturational trajectories of meaningful speech responses were atypical in temporal, but not parietal, regions in ASD. Temporal responses were associated with ASD severity, while parietal responses were associated with aberrant involuntary attentional shifting in ASD. Our findings suggest a receptive speech processing dysfunction in ASD, wherein unattended meaningful speech elicits abnormal engagement of the language system, while unattended meaningless speech, filtered out in TD individuals, engages the language system through involuntary attention capture.


Subject(s)
Autism Spectrum Disorder, Adolescent, Attention, Auditory Perception, Child, Female, Humans, Language, Magnetoencephalography, Male
20.
Neuron ; 109(6): 909-911, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33735611

ABSTRACT

Human studies of potential effects of cochlear neurodegeneration on perception have focused on impoverished input coding as the driver, with mixed results. A new study instead points to altered brain dynamics in noise as the proximal cause of hearing difficulties.


Subject(s)
Auditory Cortex, Noise-Induced Hearing Loss, Cochlea, Hearing, Humans, Noise